11 research outputs found

    Genetic Programming + Proof Search = Automatic Improvement

    Search Based Software Engineering techniques are emerging as important tools for software maintenance. Foremost among these is Genetic Improvement, which has historically applied the stochastic techniques of Genetic Programming to optimize pre-existing program code. Previous work in this area has not generally preserved program semantics. This article describes an alternative to the traditional mutation operators, employing deterministic proof search in the sequent calculus to yield semantics-preserving transformations on algebraic data types. Two case studies are described, both applicable to the recently introduced 'grow and graft' technique of Genetic Improvement: the first extends the expressiveness of the 'grafting' phase and the second transforms the representation of a list data type to yield an asymptotic efficiency improvement.
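The kind of semantics-preserving, asymptotically beneficial rewrite the second case study describes can be illustrated with a classic example. This is a minimal sketch of my own, not the paper's actual sequent-calculus transformation: a quadratic list reversal is replaced by an accumulator-passing equivalent that computes the same function in linear time.

```python
# Illustrative sketch only: a semantics-preserving representation change
# on a list type that improves asymptotic complexity. The paper derives
# such rewrites via deterministic proof search; this example is hand-written.

def reverse_naive(xs):
    """O(n^2): each '+' copies the accumulated result list."""
    if not xs:
        return []
    return reverse_naive(xs[1:]) + [xs[0]]

def reverse_acc(xs, acc=None):
    """O(n): same semantics, accumulator-passing style."""
    if acc is None:
        acc = []
    if not xs:
        return acc
    return reverse_acc(xs[1:], [xs[0]] + acc)
```

Both functions denote the same mathematical reversal, so replacing one with the other is safe wherever either appears.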

    Structured Decompositions: Structural and Algorithmic Compositionality

    We introduce structured decompositions: category-theoretic generalizations of many combinatorial invariants -- including tree-width, layered tree-width, co-tree-width and graph decomposition width -- which have played a central role in the study of structural and algorithmic compositionality in both graph theory and parameterized complexity. Structured decompositions allow us to generalize combinatorial invariants to new settings (for example, decompositions of matroids) in which they describe algorithmically useful structural compositionality. As an application of our theory, we prove an algorithmic meta-theorem for the Sub_P-composition problem which, when instantiated in the category of graphs, yields compositional algorithms for NP-hard problems such as Maximum Bipartite Subgraph, Maximum Planar Subgraph and Longest Path.
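For readers unfamiliar with the invariant being generalized, the following sketch (my own, assuming the standard graph-theoretic definition rather than anything specific to this paper) checks the three defining conditions of a tree decomposition: every vertex is covered by some bag, every edge fits inside some bag, and the bags containing each vertex form a connected subtree.

```python
# Sketch of the classical tree-decomposition conditions that
# structured decompositions generalize categorically.
import itertools

def is_tree_decomposition(graph_edges, bags, tree_edges):
    """bags: dict mapping tree-node id -> set of graph vertices."""
    vertices = set(itertools.chain.from_iterable(graph_edges))
    # 1. Every vertex appears in at least one bag.
    if not vertices <= set(itertools.chain.from_iterable(bags.values())):
        return False
    # 2. Every graph edge is contained in some bag.
    for u, v in graph_edges:
        if not any({u, v} <= bag for bag in bags.values()):
            return False
    # 3. The bags containing each vertex induce a connected subtree.
    for v in vertices:
        holders = {t for t, bag in bags.items() if v in bag}
        start = next(iter(holders))
        seen, frontier = {start}, [start]
        while frontier:
            t = frontier.pop()
            for a, b in tree_edges:
                for nb in ((b,) if a == t else (a,) if b == t else ()):
                    if nb in holders and nb not in seen:
                        seen.add(nb)
                        frontier.append(nb)
        if seen != holders:
            return False
    return True
```

The width of a valid decomposition is the largest bag size minus one; a path graph, for instance, admits a decomposition of width 1.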

    Can Virialization Shocks be Detected Around Galaxy Clusters Through the Sunyaev-Zel'dovich Effect?

    In cosmological structure formation models, massive non-linear objects in the process of formation, such as galaxy clusters, are surrounded by large-scale shocks at or around the expected virial radius. Direct observational evidence for such virial shocks is currently lacking, but we show here that their presence can be inferred from future high-resolution, high-sensitivity observations of the Sunyaev-Zel'dovich (SZ) effect in galaxy clusters. We study the detectability of virial shocks in mock SZ maps, using simple models of cluster structure (gas density and temperature distributions) and noise (background and foreground galaxy clusters projected along the line of sight, as well as the cosmic microwave background anisotropies). We find that at an angular resolution of 2'' and a sensitivity of 10 microK, expected to be reached at ~100 GHz frequencies in a ~20 hr integration with the forthcoming ALMA instrument, virial shocks associated with massive M ~ 10^15 M_Sun clusters will stand out from the noise and can be detected at high significance. More generally, our results imply that the projected SZ surface brightness profile in future high-resolution experiments will provide sensitive constraints on the density profile of cluster gas.
    Comment: 15 pages, submitted to Ap
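The projection underlying such mock maps can be sketched in a toy form. This is an illustrative model of my own, not the paper's: the Compton-y signal is proportional to the line-of-sight integral of electron pressure, and here an isothermal beta-model density is truncated at the virial radius, imprinting the kind of sharp edge in the projected profile that a virial shock would produce.

```python
# Toy sketch (assumed beta-model, isothermal gas, hard truncation at r_vir;
# arbitrary units, not the paper's actual cluster model).
import math

def y_profile(b, r_vir=1.0, beta=2.0 / 3.0, r_core=0.1, n_steps=2000):
    """Relative Compton-y at projected radius b: y ~ integral of n_e along l."""
    if b >= r_vir:
        return 0.0  # no gas outside the truncation radius
    l_max = math.sqrt(r_vir**2 - b**2)
    dl = l_max / n_steps
    total = 0.0
    for i in range(n_steps):
        l = (i + 0.5) * dl  # midpoint rule along the line of sight
        r2 = b * b + l * l
        total += (1.0 + r2 / r_core**2) ** (-1.5 * beta) * dl
    return 2.0 * total  # integral is symmetric about the cluster midplane
```

The profile falls steeply as b approaches r_vir and vanishes beyond it; a real shock would instead produce a density jump, but the observable signature is a similar edge feature in the projected surface brightness.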

    Finding the Electromagnetic Counterparts of Cosmological Standard Sirens

    The gravitational waves (GW) emitted during the coalescence of supermassive black holes (SMBHs) in the mass range 10^4-10^7 M_sun will be detectable out to high redshifts with LISA. We calculate the size and orientation of the three-dimensional error ellipse in solid angle and redshift within which the LISA event could be localized using the GW signatures alone. We take into account uncertainties in LISA's measurements of the luminosity distance and direction to the source, in the background cosmology, in weak gravitational lensing magnification due to inhomogeneities along the line of sight, and potential source peculiar velocities. We find that weak lensing errors exceed other sources of uncertainties by nearly an order of magnitude. Under the plausible assumption that BH mergers are accompanied by gas accretion leading to Eddington-limited quasar activity, we then compute the number of quasars that would be found in a typical LISA error volume, as a function of BH mass and redshift. We find that low redshifts offer the best opportunities to identify quasar counterparts to cosmological standard sirens, and that the LISA error volume will typically contain a single near-Eddington quasar at z=1. This will allow a straightforward test of the hypothesis that BH mergers are accompanied by bright quasar activity and, if the hypothesis proves correct, will guarantee the identification of a unique quasar counterpart. This would yield unprecedented tests of the physics of SMBH accretion, and offer an alternative method to precisely constrain cosmological parameters [abridged].
    Comment: 11 pages, 4 figures, version accepted for publication in Ap
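The error-volume calculation can be sketched as follows, assuming a flat LambdaCDM cosmology with illustrative parameters (H0 = 70 km/s/Mpc, Omega_m = 0.3 -- my assumptions, not necessarily the paper's values): the comoving volume of a localization box is approximately dOmega * D_C(z)^2 * (c / H(z)) * dz.

```python
# Back-of-the-envelope sketch of a localization error volume
# (assumed flat LambdaCDM, H0=70, Omega_m=0.3; illustrative only).
import math

C_KMS, H0, OM = 299792.458, 70.0, 0.3  # km/s, km/s/Mpc, matter density

def hubble(z):
    """H(z) in km/s/Mpc for flat LambdaCDM."""
    return H0 * math.sqrt(OM * (1 + z) ** 3 + (1 - OM))

def comoving_distance(z, n=1000):
    """D_C(z) in Mpc, midpoint-rule integration of c/H."""
    dz = z / n
    return sum(C_KMS / hubble((i + 0.5) * dz) for i in range(n)) * dz

def error_volume(z, d_omega_sr, dz):
    """Comoving volume (Mpc^3) of an error box dOmega x dz at redshift z."""
    d_c = comoving_distance(z)
    return d_omega_sr * d_c ** 2 * (C_KMS / hubble(z)) * dz
```

Comparing such a volume against the comoving number density of near-Eddington quasars at the corresponding BH mass gives the expected counterpart count at each redshift.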

    Embedded Dynamic Improvement

    We discuss the useful role that can be played by a subtype of improvement programming, which we term 'Embedded Dynamic Improvement'. In this approach, developer-specified variation points define the scope of improvement. A search framework is embedded at these variation points, facilitating the creation of adaptive software that can respond online to changes in its execution environment.
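A developer-specified variation point with an embedded online search can be sketched as follows. This is a hypothetical illustration, not the paper's framework: an epsilon-greedy selector chooses among candidate implementations at runtime, adapting toward whichever currently runs fastest.

```python
# Hypothetical sketch of an embedded variation point: candidate
# implementations compete online; an epsilon-greedy policy exploits
# the variant with the lowest observed mean runtime.
import random
import time

class VariationPoint:
    def __init__(self, variants, epsilon=0.1):
        self.variants = variants
        self.epsilon = epsilon
        self.stats = {v.__name__: [0.0, 0] for v in variants}  # [total s, calls]

    def __call__(self, *args):
        untried = any(calls == 0 for _, calls in self.stats.values())
        if untried or random.random() < self.epsilon:
            choice = random.choice(self.variants)  # explore
        else:
            choice = min(self.variants,            # exploit fastest so far
                         key=lambda v: self.stats[v.__name__][0]
                                       / self.stats[v.__name__][1])
        t0 = time.perf_counter()
        result = choice(*args)
        rec = self.stats[choice.__name__]
        rec[0] += time.perf_counter() - t0
        rec[1] += 1
        return result

# Example candidate implementations for one variation point.
def sort_builtin(xs):
    return sorted(xs)

def sort_insertion(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out
```

Calling `VariationPoint([sort_builtin, sort_insertion])` behaves like either sort, while the embedded search gradually concentrates calls on the faster variant for the workload actually observed.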